3 research outputs found

    Video-Based Activity Recognition for Automated Motor Assessment of Parkinson's Disease

    Over the last decade, video-enabled mobile devices have become ubiquitous, while advances in markerless pose estimation allow an individual's body position to be tracked accurately and efficiently across the frames of a video. Previous work by this and other groups has shown that pose-extracted kinematic features can be used to reliably measure motor impairment in Parkinson's disease (PD). This presents the prospect of developing an asynchronous, scalable, video-based assessment of motor dysfunction. Crucial to this endeavour is the ability to automatically recognise the class of an action being performed, without which manual labelling is required. Representing the evolution of body joint locations as a spatio-temporal graph, we implement a deep-learning model for video- and frame-level classification of activities performed according to part 3 of the Movement Disorder Society Unified PD Rating Scale (MDS-UPDRS). We train and validate this system using a dataset of n = 7310 video clips, recorded at 5 independent sites. This approach reaches human-level performance in detecting and classifying periods of activity within monocular video clips. Our framework could support clinical workflows and patient care at scale through applications such as quality monitoring of clinical data collection, automated labelling of video streams, or a module within a remote self-assessment system.
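    The abstract above describes representing pose keypoints as a spatio-temporal graph and classifying activities with a deep network, in the spirit of spatio-temporal graph convolutional models. As a rough illustration only, the sketch below shows one spatio-temporal graph-convolution block in PyTorch; the layer sizes, joint count, adjacency matrix, and class count are assumptions made for the example and are not taken from the paper.

```python
# Minimal sketch of a spatio-temporal graph convolution over pose keypoints,
# loosely in the style of ST-GCN. All class names, joint counts, and shapes
# here are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn


class SpatioTemporalGraphConv(nn.Module):
    """Graph convolution over joints followed by temporal convolution over frames."""

    def __init__(self, in_channels, out_channels, adjacency, temporal_kernel=9):
        super().__init__()
        # Adjacency matrix encoding the skeleton's joint connectivity.
        self.register_buffer("A", adjacency)
        self.spatial = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        pad = (temporal_kernel - 1) // 2
        self.temporal = nn.Conv2d(
            out_channels, out_channels, kernel_size=(temporal_kernel, 1), padding=(pad, 0)
        )
        self.relu = nn.ReLU()

    def forward(self, x):
        # x: (batch, channels, frames, joints)
        x = self.spatial(x)
        # Mix information between connected joints via the adjacency matrix.
        x = torch.einsum("nctv,vw->nctw", x, self.A)
        x = self.temporal(x)
        return self.relu(x)


# Toy usage: 2D keypoints (x, y) for 17 joints over 150 frames, classified into
# a hypothetical set of 10 activity classes.
num_joints, num_classes = 17, 10
A = torch.eye(num_joints)  # placeholder adjacency; a real skeleton graph would add bone edges
model = nn.Sequential(
    SpatioTemporalGraphConv(2, 64, A),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, num_classes),
)
clip = torch.randn(1, 2, 150, num_joints)  # (batch, xy-channels, frames, joints)
logits = model(clip)
print(logits.shape)  # torch.Size([1, 10])
```

    The ordering in the block (spatial mixing across connected joints, then convolution along the time axis) is the basic design idea behind spatio-temporal graph models for skeleton sequences; the paper's actual architecture may differ.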

    An Evaluation of KELVIN, an Artificial Intelligence Platform, as an Objective Assessment of the MDS UPDRS Part III

    BACKGROUND: Parkinson's disease severity is typically measured using the Movement Disorder Society Unified Parkinson's Disease Rating Scale (MDS-UPDRS). While training for this scale exists, users may vary in how they score a patient, resulting in intra-rater and inter-rater variability. OBJECTIVE: In this study we explored the consistency of an artificial intelligence platform compared with traditional clinical scoring in the assessment of motor severity in PD. METHODS: Twenty-two PD patients underwent simultaneous MDS-UPDRS scoring by two experienced MDS-UPDRS raters, and the two sets of accompanying video footage were also scored by an artificial intelligence video analysis platform known as KELVIN. RESULTS: KELVIN produced a summary score for 7 MDS-UPDRS part 3 items with good inter-rater reliability (intraclass correlation coefficient (ICC) 0.80 in the OFF-medication state, ICC 0.73 in the ON-medication state). Clinician scores had exceptionally high inter-rater reliability in both the OFF (0.99) and ON (0.94) medication conditions, possibly reflecting the highly experienced team. The ICC between the mean clinician and mean KELVIN scores for the equivalent 7 motor items was 0.84 in the OFF-medication state and 0.31 in the ON-medication state, the latter possibly due to dyskinesia affecting the KELVIN scores. CONCLUSION: We conclude that KELVIN may prove useful in the capture and scoring of multiple items of MDS-UPDRS part 3, with levels of consistency not far short of those achieved by experienced MDS-UPDRS clinical raters, and is worthy of further investigation.
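    The agreement figures quoted above are intraclass correlation coefficients. As a worked illustration of how such a statistic can be computed from a subjects-by-raters score matrix, the sketch below implements the two-way random-effects, absolute-agreement form ICC(2,1); the abstract does not state which ICC formulation the study used, and the ratings in the example are invented, not data from the study.

```python
# Illustrative computation of ICC(2,1) (Shrout & Fleiss two-way random effects,
# absolute agreement, single rater) from a subjects-by-raters matrix.
import numpy as np


def icc_2_1(scores):
    """ICC(2,1) for an (n_subjects, k_raters) array of ratings."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand_mean = scores.mean()
    row_means = scores.mean(axis=1)   # per-subject means
    col_means = scores.mean(axis=0)   # per-rater means

    # Sums of squares for the two-way ANOVA decomposition.
    ss_rows = k * np.sum((row_means - grand_mean) ** 2)
    ss_cols = n * np.sum((col_means - grand_mean) ** 2)
    ss_total = np.sum((scores - grand_mean) ** 2)
    ss_error = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )


# Toy example: 5 hypothetical patients scored by 2 raters (made-up numbers).
ratings = [[30, 32], [45, 44], [12, 15], [28, 27], [50, 52]]
print(round(icc_2_1(ratings), 2))
```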

    Video-based activity recognition for automated motor assessment of Parkinson’s disease

    Over the last decade, video-enabled mobile devices have become almost ubiquitous, while advances in markerless pose estimation allow an individual's body position to be tracked across the frames of a video. Previous work by this and other groups has shown that pose-extracted kinematic features can be used to reliably measure motor impairment in Parkinson's disease (PD). This presents the prospect of developing an asynchronous, scalable, video-based assessment of motor dysfunction. Crucial to this endeavour is the ability to automatically recognise the class of an action being performed, without which manual labelling is required. Representing the evolution of body joint locations as a spatio-temporal graph, we implement a deep-learning model for frame-level classification of activities performed according to part 3 of the Movement Disorder Society Unified PD Rating Scale (MDS-UPDRS). We train and validate this system using a dataset of n = 7220 video clips, recorded at 5 independent sites. This approach achieves human-level performance in classifying and labelling periods of activity within monocular video clips. Our framework could support clinical workflows and patient care at scale through applications such as quality monitoring of clinical data collection, automated labelling of video streams, or a module within a remote self-assessment system.